
    Vertex Downgrading to Minimize Connectivity

    We consider the problem of interdicting a directed graph by deleting nodes, with the goal of minimizing the local edge connectivity of the remaining graph from a given source to a sink. We introduce and study a general downgrading variant of the interdiction problem in which the capacity of an arc is a function of the subset of its endpoints that are downgraded, and the goal is to minimize the downgraded capacity of a minimum source-sink cut subject to a node downgrading budget. This models, for example, the case where both endpoints of an arc must be downgraded to remove it. For this generalization, we provide a bicriteria (4,4)-approximation that downgrades nodes of total weight at most 4 times the budget and returns a solution in which the downgraded connectivity from the source to the sink is at most 4 times that of an optimal solution. We accomplish this with an LP relaxation and a rounding procedure based on a ball-growing algorithm driven by the LP values. We further generalize the downgrading problem to one where each vertex can be downgraded to one of k levels, and the arc capacities are functions of the pair of levels to which the arc's endpoints are downgraded. We generalize our LP rounding to obtain a (4k,4k)-approximation for this case.
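
    To make the objective concrete, the following minimal brute-force sketch (Python, assuming the networkx library) handles the plain node-deletion special case: it enumerates node subsets within the budget and recomputes the s-t max-flow value. The graph, the node-weight dictionary, and the budget are illustrative inputs; this is only a toy baseline, not the paper's LP-based (4,4)-approximation or its ball-growing rounding.

    # Toy exhaustive search for node interdiction: delete nodes (never s or t)
    # of total weight at most `budget` so that the remaining s-t max-flow value
    # is minimized.  Arcs of G are assumed to carry a "capacity" attribute.
    from itertools import combinations
    import networkx as nx

    def min_connectivity_after_deletion(G, s, t, node_weight, budget):
        candidates = [v for v in G if v not in (s, t)]
        best_val = nx.maximum_flow_value(G, s, t)          # delete nothing
        best_set = frozenset()
        for r in range(1, len(candidates) + 1):
            for S in combinations(candidates, r):
                if sum(node_weight[v] for v in S) > budget:
                    continue
                H = G.copy()
                H.remove_nodes_from(S)
                val = nx.maximum_flow_value(H, s, t)
                if val < best_val:
                    best_val, best_set = val, frozenset(S)
        return best_val, best_set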

    Randomized Contractions for Multiobjective Minimum Cuts

    We show that Karger's randomized contraction method (SODA 93) can be adapted to multiobjective global minimum cut problems with a constant number of edge or node budget constraints to give efficient algorithms. For global minimum cuts with a single edge-budget constraint, our extension of the randomized contraction method has running time Õ(n^3) in an n-node graph, improving upon the best-known randomized algorithm with running time Õ(n^4) due to Armon and Zwick (Algorithmica 2006). Our analysis also gives a new upper bound of O(n^3) on the number of optimal solutions for the single edge-budget min cut problem. For the case of (k-1) edge-budget constraints, the extension of our algorithm saves a logarithmic factor over the best-known randomized running time of O(n^{2k} log^3 n). A main feature of our algorithms is to adaptively choose, at each step, the appropriate cost function used in the random selection of edges to be contracted. For the global min cut problem with a constant number of node budgets, we give a randomized algorithm with running time Õ(n^2), improving the current best deterministic running time of O(n^3) due to Goemans and Soto (SIAM Journal on Discrete Mathematics 2013). Our method also shows that the total number of distinct optimal solutions is bounded by O(n^2), as in the case of global min cuts. Our algorithm extends to the node-budget-constrained global min cut problem excluding a given sink with the same running time and the same bound on the number of optimal solutions, again improving upon the best-known running time by a factor of O(n). For node-budget-constrained problems, our improvements arise from incorporating the idea of merging any infeasible super-nodes that arise during the random contraction process. In contrast to cuts excluding a sink, we note that the node-cardinality-constrained min cut problem containing a given source is strongly NP-hard, via a reduction from graph bisection.
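
    For reference, here is a bare-bones sketch of the randomized contraction step with a pluggable per-step cost function (plain Python, standard library only). The adaptive rule the paper uses to pick the cost function under edge or node budgets is not reproduced; `cost` is an illustrative callback, and in practice the routine would be repeated many times, keeping the best budget-feasible cut found.

    # Randomized edge contraction in the spirit of Karger's algorithm.
    # edges: list of (u, v, data) on nodes 0..n-1; cost(step, edge) returns the
    # sampling weight of an edge at the given step.  Contract random edges
    # until two super-nodes remain and return the edges crossing that cut.
    import random

    def random_contraction(n, edges, cost):
        parent = list(range(n))

        def find(x):                       # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        components, step = n, 0
        while components > 2:
            live = [e for e in edges if find(e[0]) != find(e[1])]
            weights = [cost(step, e) for e in live]
            u, v, _ = random.choices(live, weights=weights, k=1)[0]
            parent[find(u)] = find(v)      # contract the sampled edge
            components -= 1
            step += 1
        return [e for e in edges if find(e[0]) != find(e[1])]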

    Approximation and resolution of the min-max and min-max regret versions of combinatorial optimization problems

    In decision theory, approaches based on solving the min-max (regret) versions of optimization problems are often used to obtain solutions that behave well in the worst case. The complexity of these problems has been studied extensively over the last decade. We present some complementary complexity results and initiate the study of the approximability of the min-max (regret) versions of several classical problems, such as shortest path, spanning tree, and knapsack, for which we present both positive and negative results. Beyond this theoretical study, we consider an application of the maximum regret criterion and, more generally, of robust approaches to the data association problem. Formally, this problem can be modelled as a multidimensional assignment problem. Given the various sources of imprecision, the model is often not adequate. We show that it is useful to evaluate the coefficients of the objective function by intervals instead of using the most plausible values. Different strategies are studied to solve this problem, and numerical examples are provided to demonstrate the effectiveness of our approach.
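
    As an illustration of the interval model, the following Python sketch (assuming numpy and scipy) computes the maximum regret of a fixed solution for the ordinary two-dimensional assignment problem, using the standard property that the worst-case scenario for a given solution puts its own coefficients at their upper bounds and every other coefficient at its lower bound. The thesis's data-association application is a multidimensional assignment; this square matching case and the function name are only illustrative.

    # Max regret of a fixed assignment under interval costs.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def max_regret(assignment, lo, hi):
        """assignment[i] = column assigned to row i; lo, hi: n x n arrays of
        interval bounds.  Returns the maximum regret over all cost scenarios."""
        n = len(assignment)
        rows = np.arange(n)
        worst = lo.copy()                                # others at lower bounds
        worst[rows, assignment] = hi[rows, assignment]   # chosen cells at upper bounds
        cost_of_solution = worst[rows, assignment].sum()
        r, c = linear_sum_assignment(worst)              # best response in that scenario
        return cost_of_solution - worst[r, c].sum()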

    Robustness in Multi-Criteria Decision Aiding

    Cahiers du Lamsade, no. 276, July 2008. After clarifying the meaning we give to several of the terms used in this paper (e.g., robustness, result, procedure, method), we highlight the principal characteristics of most of the publications about robustness. We then present several partial answers to the question "why is robustness a matter of interest in Multi-Criteria Decision Aiding (MCDA)?" (see Section 2), and only then provide an outline for the paper. At this point, we introduce the concept of variable settings, which serves to connect what we define as the formal representation of the decision-aiding problem with the real-life decisional context. We also introduce five typical problems that serve as reference problems in the rest of the paper. Section 3 deals with recent approaches that involve a single robustness criterion which completes (but does not replace) a preference system defined beforehand, independently of the robustness concern. The following section deals with approaches in which the robustness concern is modelled using several criteria. Section 5, before the conclusion, deals with approaches in which robustness is handled other than by using one or several criteria to compare solutions. These approaches generally rely on one or several properties intended to characterize the robust solutions or to establish robust conclusions. In these last three sections, various new directions for research are envisioned.

    Complexity of the min-max (regret) versions of min cut problems

    This paper investigates the complexity of the min-max and min-max regret versions of the min s-t cut and min cut problems. Even though the underlying problems are closely related and both polynomial, the complexity of their min-max and min-max regret versions, for a constant number of scenarios, is quite contrasted: they are respectively strongly NP-hard and polynomial. However, for a non-constant number of scenarios, these versions become strongly NP-hard for both problems. In the interval scenario case, the min-max versions are trivially polynomial. Moreover, for the min-max regret versions, we obtain the same contrasted result as for a constant number of scenarios: min-max regret min s-t cut is strongly NP-hard whereas min-max regret min cut is polynomial.

    Complexity of the min-max (regret) versions of cut problems

    This paper investigates the complexity of the min-max and min-max regret versions of the s-t min cut and min cut problems. Even though the underlying problems are closely related and both polynomial, we show that the complexity of their min-max and min-max regret versions, for a constant number of scenarios, is quite contrasted: they are respectively strongly NP-hard and polynomial. Thus, we exhibit the first polynomial problem, s-t min cut, whose min-max (regret) versions are strongly NP-hard. Also, min cut is one of the few polynomial problems whose min-max (regret) versions remain polynomial. However, these versions become strongly NP-hard for a non-constant number of scenarios. In the interval data case, the min-max versions are trivially polynomial. Moreover, for the min-max regret versions, we obtain the same contrasted result as for a constant number of scenarios: min-max regret s-t min cut is strongly NP-hard whereas min-max regret min cut is polynomial.
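
    The interval-data observation can be made concrete: in the min-max version, every cut attains its worst capacity when all arc capacities sit at their upper bounds, so a single min s-t cut computation on the upper-bound capacities suffices. A small Python sketch assuming networkx, with illustrative attribute names "lo"/"hi" for the interval endpoints:

    # Min-max min s-t cut under interval capacities: the worst case of any cut
    # is reached at the upper bounds, so one min cut with capacities = hi
    # solves the min-max version.
    import networkx as nx

    def minmax_interval_min_st_cut(G, s, t):
        H = nx.DiGraph()
        for u, v, data in G.edges(data=True):
            H.add_edge(u, v, capacity=data["hi"])
        cut_value, (S, T) = nx.minimum_cut(H, s, t)
        return cut_value, S, T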

    Pseudo-polynomial algorithms for min-max and min-max regret problems

    We present in this paper general pseudo-polynomial time algorithms to solve the min-max and min-max regret versions of some polynomial or pseudo-polynomial problems under a constant number of scenarios. Using easily computable bounds, we can improve these algorithms. In this way, we provide pseudo-polynomial algorithms for the min-max and min-max regret versions of several classical problems, including minimum spanning tree, shortest path, and knapsack.
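
    To illustrate the flavor of such pseudo-polynomial schemes, here is a sketch (plain Python, standard library only) for min-max shortest path with two scenarios: each node is expanded into states (node, scenario-1 cost) and Dijkstra is run on the scenario-2 cost, so the running time depends on the magnitude of the scenario-1 costs. This is a generic illustration under the assumption of nonnegative integer arc costs, not the specific algorithms or bounds of the paper.

    # Min-max shortest path with two cost scenarios, pseudo-polynomial in the
    # total scenario-1 cost.  arcs: list of (u, v, c1, c2) on nodes 0..n-1.
    # Returns min over s-t paths P of max(c1(P), c2(P)), or None if t is
    # unreachable from s.
    import heapq

    def minmax_shortest_path(n, arcs, s, t):
        ub1 = sum(c1 for _, _, c1, _ in arcs) + 1     # bound on scenario-1 path cost
        INF = float("inf")
        dist = [[INF] * ub1 for _ in range(n)]        # dist[v][c1] = best c2
        dist[s][0] = 0
        adj = [[] for _ in range(n)]
        for u, v, c1, c2 in arcs:
            adj[u].append((v, c1, c2))
        heap = [(0, s, 0)]                            # (c2, node, c1)
        while heap:
            c2, u, c1 = heapq.heappop(heap)
            if c2 > dist[u][c1]:
                continue
            for v, a1, a2 in adj[u]:
                n1, n2 = c1 + a1, c2 + a2
                if n1 < ub1 and n2 < dist[v][n1]:
                    dist[v][n1] = n2
                    heapq.heappush(heap, (n2, v, n1))
        vals = [max(c1, c2) for c1, c2 in enumerate(dist[t]) if c2 < INF]
        return min(vals) if vals else None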